Leftover hash lemma
The leftover hash lemma is a lemma in cryptography first stated by Russell Impagliazzo, Leonid Levin, and Michael Luby.
Imagine that you have a secret key \scriptstyle X that consists of \scriptstyle n uniform random bits, and you would like to use this secret key to encrypt a message. Unfortunately, you were a bit careless with the key and know that an adversary was able to learn about \scriptstyle t \;<\; n bits of that key, but you do not know which ones. Can you still use your key, or do you have to throw it away and choose a new one? The leftover hash lemma tells us that we can produce a key of about \scriptstyle n \,-\, t bits over which the adversary has almost no knowledge. Since the adversary knows all but \scriptstyle n \,-\, t bits, this is almost optimal.
More precisely, the leftover hash lemma tells us that we can extract a number of bits asymptotic to \scriptstyle H_\infty(X) (the min-entropy of \scriptstyle X) from a random variable \scriptstyle X, and that these extracted bits are almost uniformly distributed. In other words, an adversary with some partial knowledge about \scriptstyle X will have almost no knowledge about the extracted value. This is why the technique is also called privacy amplification (see the privacy amplification section of the article Quantum key distribution).
Randomness extractors achieve the same result, but normally use less randomness.
==Statement==
Let \scriptstyle X be a random variable over \scriptstyle \mathcal{X} and let \scriptstyle m \;>\; 0. Let \scriptstyle h :\; \mathcal{S} \,\times\, \mathcal{X} \;\rightarrow\; \{0,\, 1\}^m be a 2-universal hash function. If
:m \leq H_\infty(X) - 2 \log\left(\frac{1}{\varepsilon}\right)
then for \scriptstyle S uniform over \scriptstyle \mathcal{S} and independent of \scriptstyle X, we have
:\delta\left( \left( h(S, X), S \right), \left( U, S \right) \right) \leq \varepsilon
where \scriptstyle U is uniform over \scriptstyle \{0,\, 1\}^m and independent of \scriptstyle S.
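To make the statement concrete, here is a minimal Python sketch of the extraction step, assuming we model the key as a 64-bit integer and use the Carter–Wegman family \scriptstyle h_{a,b}(x) \;=\; ((a x \,+\, b) \bmod p) \bmod 2^m, which is 2-universal; the parameters (a 64-bit key, \scriptstyle t \;=\; 16 leaked bits, \scriptstyle \varepsilon \;=\; 2^{-8}, logs taken base 2) are illustrative choices, not part of the lemma.

 import secrets

 P = (1 << 89) - 1  # the Mersenne prime 2^89 - 1, larger than any 64-bit key

 def sample_seed():
     # The public seed S = (a, b) selects one hash from the 2-universal family.
     return 1 + secrets.randbelow(P - 1), secrets.randbelow(P)

 def extract(x, seed, m):
     # Carter--Wegman hash: ((a*x + b) mod p) mod 2^m.
     a, b = seed
     return ((a * x + b) % P) % (1 << m)

 # Illustrative numbers: n = 64 key bits, at most t = 16 of them leaked, so
 # the adversary's view leaves H_inf(X) >= 48; with eps = 2^-8 the lemma
 # allows m <= 48 - 2*log2(1/eps) = 32 almost-uniform bits.
 x = secrets.randbits(64)   # stand-in for the secret key X
 seed = sample_seed()       # S is public and independent of X
 new_key = extract(x, seed, 32)

The seed \scriptstyle S can be published: the lemma guarantees that the pair (new key, seed) is statistically close to (uniform, seed).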
\scriptstyle H_\infty(X) \;=\; -\log \max_x \Pr[X = x] is the min-entropy of \scriptstyle X, which measures the amount of randomness \scriptstyle X has. The min-entropy is always less than or equal to the Shannon entropy. Note that \scriptstyle \max_x \Pr[X = x] is the probability of correctly guessing \scriptstyle X. (The best guess is to guess the most probable value.) Therefore, the min-entropy measures how difficult it is to guess \scriptstyle X.
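As a toy illustration with a made-up distribution, the definition translates directly into code; note that the min-entropy is set by the single most likely value, while the Shannon entropy averages over all of them.

 import math

 def min_entropy(p):
     # H_inf(X) = -log2( max_x Pr[X = x] )
     return -math.log2(max(p))

 def shannon_entropy(p):
     return -sum(q * math.log2(q) for q in p if q > 0)

 dist = [0.5, 0.25, 0.125, 0.125]  # Pr[X = x] over four outcomes
 print(min_entropy(dist))          # 1.0: the best guess succeeds half the time
 print(shannon_entropy(dist))      # 1.75: always at least the min-entropy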
\scriptstyle \delta(X,\, Y) \;=\; \frac{1}{2} \sum_v \left| \Pr[X = v] \,-\, \Pr[Y = v] \right| is the statistical distance between \scriptstyle X and \scriptstyle Y.
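And a matching toy computation of the statistical distance, again with made-up numbers, assuming both distributions are given over the same ordered support:

 def statistical_distance(p, q):
     # delta(X, Y) = 1/2 * sum_v | Pr[X = v] - Pr[Y = v] |
     return 0.5 * sum(abs(pv - qv) for pv, qv in zip(p, q))

 p = [0.5, 0.25, 0.25]              # Pr[X = v]
 u = [1/3, 1/3, 1/3]                # a uniform U over the same three values
 print(statistical_distance(p, u))  # 1/6, about 0.167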
==See also==

* Universal hashing
* Min-entropy
* Rényi entropy
* Information theoretic security
